Objective: Electrocardiogram (ECG) signals commonly suffer from noise interference, such as baseline wander. High-quality and high-fidelity reconstruction of ECG signals is of great significance for diagnosing cardiovascular diseases. Therefore, this paper proposes a novel ECG baseline wander and noise removal technology. Methods: We extended the diffusion model in a conditional manner specific to ECG signals, namely the Deep Score-based Diffusion model for ECG baseline wander and noise removal (DeScoD-ECG). Moreover, we deploy a multi-shot averaging strategy to improve signal reconstruction. We conducted experiments on the QT Database and the MIT-BIH Noise Stress Test Database to verify the feasibility of the proposed method. Baseline methods are adopted for comparison, including traditional digital-filter-based and deep-learning-based methods. Results: The quantitative evaluation shows that the proposed method obtains outstanding performance on four distance-based similarity metrics (sum of squared distances, maximum absolute distance, percentage root-mean-square difference, and cosine similarity), with 3.771 $\pm$ 5.713 au, 0.329 $\pm$ 0.258 au, 40.527 $\pm$ 26.258\%, and 0.926 $\pm$ 0.087, respectively, an overall improvement of at least 20% over the best baseline method. Conclusion: This paper demonstrates the state-of-the-art performance of DeScoD-ECG for ECG noise removal, which better approximates the true data distribution and is more stable under extreme noise corruption. Significance: This study is one of the first to extend conditional diffusion-based generative models to ECG noise removal, and DeScoD-ECG has the potential to be widely used in biomedical applications.
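As a rough sketch of the multi-shot averaging strategy, assuming a generic conditional sampler interface (the `sampler` callable below is a stand-in, not DeScoD-ECG's actual API): because the reverse diffusion process is stochastic, several independent reconstructions conditioned on the same noisy input can be averaged to stabilize the estimate.

```python
import torch

def multi_shot_denoise(sampler, noisy_ecg, num_shots=10):
    # Draw several independent reconstructions from the stochastic
    # reverse-diffusion process, all conditioned on the same noisy
    # input, then average them into a single estimate.
    shots = [sampler(noisy_ecg) for _ in range(num_shots)]
    return torch.stack(shots, dim=0).mean(dim=0)

# Toy stand-in: any callable mapping a noisy signal to one stochastic
# clean-signal estimate would slot in here.
toy_sampler = lambda x: x + 0.01 * torch.randn_like(x)
clean_estimate = multi_shot_denoise(toy_sampler, torch.randn(1, 512))
```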
Clickbait articles often have a title phrased as a question or vague teaser that entices the user to click on the link and read the article to find the explanation. We developed a system that automatically finds the answer or explanation of the clickbait hook in the website text, so that the user does not need to read through the text themselves. We fine-tune an extractive question-answering model (RoBERTa) and an abstractive one (T5), using data scraped from the 'StopClickbait' Facebook pages and Reddit's 'SavedYouAClick' subforum. We find that both the extractive and abstractive models improve significantly after fine-tuning. The extractive model performs slightly better according to ROUGE scores, while the abstractive one has a slight edge in terms of BERTScore.
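As a minimal illustration of the extractive setup, the sketch below treats the clickbait headline as the question and the article body as the context; `deepset/roberta-base-squad2` is a public stand-in checkpoint, not the authors' fine-tuned model, and the strings are invented.

```python
from transformers import pipeline

# Extractive question answering: the headline plays the role of the
# question and the scraped article text the role of the context.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

headline = "You won't believe what this common fruit does to your sleep"
article = ("Researchers tracked volunteers for four weeks and found that "
           "eating two kiwis an hour before bed shortened sleep onset.")

result = qa(question=headline, context=article)
print(result["answer"])  # the extracted spoiler span
```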
We propose a fully unsupervised method to detect bias in contextualized embeddings. The method leverages the assortative information latently encoded by social networks and combines orthogonality regularization, structured sparsity learning, and graph neural networks to find the embedding subspace capturing this information. As a concrete example, we focus on the phenomenon of ideological bias: we introduce the concept of an ideological subspace, show how it can be found by applying our method to online discussion forums, and present techniques to probe it. Our experiments suggest that the ideological subspace encodes abstract evaluative semantics and reflects changes in the political left-right spectrum during the presidency of Donald Trump.
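As a sketch of the orthogonality-regularization ingredient alone (the structured sparsity and graph neural network components are omitted), one can penalize the off-diagonal mass of the Gram matrix of a learned projection so that the recovered subspace directions stay mutually orthogonal; the dimensions below are illustrative assumptions.

```python
import torch

def orthogonality_penalty(W):
    # Penalize deviation of W^T W from the identity, pushing the
    # projection's column directions toward mutual orthogonality.
    gram = W.T @ W
    eye = torch.eye(gram.shape[0], device=W.device)
    return ((gram - eye) ** 2).sum()

# W projects 768-dim contextualized embeddings onto a candidate
# 8-dim (e.g., ideological) subspace.
W = torch.nn.Parameter(0.02 * torch.randn(768, 8))
loss = orthogonality_penalty(W)  # added to the main training objective
```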
Just as vision plays a fundamental role in guiding adaptive locomotion in humans, computer vision may bring substantial improvements to the control strategy of a walking assistive technology when performing environment-based assistance modulation. In this work, we developed a hip exosuit controller able to distinguish among three different walking terrains through the use of an RGB camera and to adapt the assistance accordingly. The system was tested with seven healthy participants walking along an overground path comprising staircases and level ground. Subjects performed the task with the exosuit disabled (Exo Off), with a constant assistance profile (Vision Off), and with assistance modulation (Vision On). Our results showed that the controller was able to promptly classify the path in front of the user in real time, with an overall per-class accuracy above 85%, and to modulate the assistance accordingly. Evaluation of the effects on the user showed that Vision On outperformed the other two conditions: we obtained significantly higher metabolic savings than Exo Off, with a peak of about -20% when climbing up the staircase and about -16% over the whole path, and higher savings than Vision Off when ascending or descending stairs. Such advancements may represent a step forward for the adoption of lightweight walking assistive technologies in real-life scenarios.
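The abstract does not specify the vision model, so the following is a generic stand-in sketch: a small CNN mapping RGB camera frames to one of three terrain classes, whose prediction would then select the assistance profile.

```python
import torch
import torch.nn as nn

class TerrainClassifier(nn.Module):
    # Maps an RGB frame to logits over three terrain classes:
    # level ground, stairs up, stairs down.
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):  # x: (batch, 3, H, W)
        return self.head(self.features(x).flatten(1))

frame = torch.rand(1, 3, 224, 224)                  # one camera frame
terrain = TerrainClassifier()(frame).argmax(dim=1)  # class index drives assistance
```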
An effective aggregation of node features into a graph-level representation via readout functions is an essential step in numerous learning tasks involving graph neural networks. Typically, readouts are simple and non-adaptive functions designed such that the resulting hypothesis space is permutation invariant. Prior work on deep sets indicates that such readouts might require complex node embeddings that can be difficult to learn via standard neighborhood aggregation schemes. Motivated by this, we investigate the potential of adaptive readouts given by neural networks that do not necessarily give rise to permutation invariant hypothesis spaces. We argue that in some problems, such as binding affinity prediction where molecules are typically presented in a canonical form, it might be possible to relax the constraints on permutation invariance of the hypothesis space and learn a more effective model of the affinity by employing an adaptive readout function. Our empirical results demonstrate the effectiveness of neural readouts on more than 40 datasets spanning different domains and graph characteristics. Moreover, we observe a consistent improvement over standard readouts (i.e., sum, max, and mean) across different numbers of neighborhood aggregation iterations and different convolutional operators.
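A minimal sketch of the contrast, under the simplifying assumption of fixed-size graphs whose nodes arrive in a canonical order: a standard sum readout next to an adaptive MLP readout that deliberately forgoes permutation invariance.

```python
import torch
import torch.nn as nn

def sum_readout(h):          # h: (num_nodes, dim) -> (dim,)
    # Standard permutation-invariant readout.
    return h.sum(dim=0)

class AdaptiveReadout(nn.Module):
    # Concatenates node embeddings in their given (canonical) order and
    # maps them through an MLP, so node order now matters.
    def __init__(self, num_nodes, dim, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_nodes * dim, 2 * out_dim), nn.ReLU(),
            nn.Linear(2 * out_dim, out_dim),
        )

    def forward(self, h):    # h: (num_nodes, dim)
        return self.mlp(h.flatten())

h = torch.randn(12, 64)                       # 12 nodes, canonical order
graph_vec = AdaptiveReadout(12, 64, 128)(h)   # learned graph-level vector
```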
Recent advances in deep learning algorithms have brought significant benefits to solving many medical image analysis problems. Training deep learning models usually requires large datasets with expert-labeled annotations. However, acquiring expert-labeled annotations is not only expensive but also subjective and error-prone, and inter- and intra-observer variability introduces noise into the labels. This is especially a problem when deep learning models are used to segment medical images, owing to the ambiguity of anatomical structures. Image-based medical diagnosis tools that use deep learning models trained with incorrect segmentation labels can lead to wrong diagnoses and treatment suggestions. Compared with single-rater annotations, multi-rater annotations may be better suited for training deep learning models with small training sets. The aim of this paper is to develop and evaluate a method that generates probabilistic labels based on multi-rater annotations and anatomical knowledge of lesion features in MRI, and a method that trains segmentation models using these probabilistic labels with a normalized active loss as a noise-tolerant loss function. The model was evaluated by comparing it with the binary ground truth on 17 knee MRI scans for the clinical segmentation and detection of bone marrow lesions (BMLs). The proposed method improved precision by 14%, recall by 22%, and the Dice score by 8% compared with a binary cross-entropy loss function. Overall, the results of this work suggest that the proposed normalized active loss with soft labels successfully mitigates the effects of noisy labels.
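A minimal sketch of the probabilistic-label idea, leaving out the anatomical prior and the exact noise-tolerant loss: binary masks from several raters are averaged into a per-voxel soft label that supervises the segmentation model in place of a single hard mask.

```python
import numpy as np

# Three raters annotate the same 2x3 image patch.
rater_masks = np.stack([
    np.array([[0, 1, 1], [0, 0, 1]]),   # rater 1
    np.array([[0, 1, 0], [0, 1, 1]]),   # rater 2
    np.array([[0, 1, 1], [0, 0, 1]]),   # rater 3
])

# Voxel-wise agreement becomes a probabilistic (soft) label in [0, 1];
# disagreement shows up as intermediate values such as 1/3 or 2/3.
soft_label = rater_masks.mean(axis=0)
print(soft_label)
```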
Interpretive scholars generate knowledge from text corpora by manually sampling documents, applying codes, and refining and collating codes into categories until meaningful themes emerge. Given a large corpus, machine learning could help scale this data sampling and analysis, but prior research shows that experts are generally concerned about algorithms potentially disrupting or driving interpretive scholarship. We take a human-centered design approach to address concerns around machine-assisted interpretive research and build Scholastic, which incorporates a machine-in-the-loop clustering algorithm to scaffold interpretive text analysis. As a scholar applies codes to documents and refines them, the resulting coding schema serves as structured metadata that constrains hierarchical document and word clusters inferred from the corpus. Interactive visualizations of these clusters can help scholars strategically sample documents further toward insights. Scholastic demonstrates how human-centered algorithm design and visualizations employing familiar metaphors can support inductive and interpretive research methodologies through interactive topic modeling and document clustering.
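As a loose illustration of how a coding schema can constrain clustering (Scholastic's actual hierarchical model is more elaborate than this), the sketch below seeds k-means centroids from documents the scholar has already coded; the documents and code names are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the council approved new housing policy",   # coded: policy
    "residents protested the zoning decision",   # coded: protest
    "city hall passed the transit budget",       # uncoded
    "marchers gathered outside the courthouse",  # uncoded
]
codes = {"policy": [0], "protest": [1]}          # scholar-applied codes

X = TfidfVectorizer().fit_transform(docs).toarray()
# Each code's mean document vector seeds one cluster centroid, so the
# inferred clusters follow the emerging coding schema.
seeds = np.stack([X[idx].mean(axis=0) for idx in codes.values()])
clusters = KMeans(n_clusters=len(seeds), init=seeds, n_init=1).fit_predict(X)
```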
Geographic features are commonly used to improve the performance of pretrained language models (PLMs) on NLP tasks where they are intuitively beneficial (e.g., geolocation prediction, dialect feature prediction). Existing methods, however, leverage geographic information in task-specific fine-tuning and fail to integrate it into the geo-linguistic knowledge encoded by PLMs, which would make it transferable across different tasks. In this paper, we introduce an approach to task-agnostic geoadaptation of PLMs that forces them to learn associations between linguistic phenomena and geographic locations. Geoadaptation is an intermediate training step that couples language modeling and geolocation prediction in a multi-task learning setup. In our main set of experiments, we geoadapt BERTi\'{c}, a PLM for Bosnian-Croatian-Montenegrin-Serbian (BCMS), using a corpus of geotagged BCMS tweets. Evaluation on three tasks, namely fine-tuned as well as zero-shot geolocation prediction and zero-shot prediction of dialect features, shows that geoadaptation is very effective: e.g., we obtain state-of-the-art performance in supervised geolocation prediction and report massive gains over geographically uninformed PLMs on zero-shot geolocation prediction. Moreover, in follow-up experiments we successfully geoadapt two other PLMs, specifically ScandiBERT on Norwegian, Swedish, and Danish tweets and GermanBERT on Jodel posts in German from Austria, Germany, and Switzerland, proving that the benefits of geoadaptation are not limited to a particular language area and PLM.
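A minimal sketch of the multi-task coupling, with illustrative heads and shapes (`mlm_head`, `geo_head`, and the regression framing of geolocation are assumptions, not the authors' code): the joint objective simply sums a masked-language-modeling loss and a geolocation-prediction loss computed from the same encoder states.

```python
import torch
import torch.nn as nn

hidden = torch.randn(8, 128, 768)        # encoder output: (batch, seq, dim)
mlm_head = nn.Linear(768, 30522)         # token logits over the vocabulary
geo_head = nn.Linear(768, 2)             # predicts (latitude, longitude)

mlm_labels = torch.randint(0, 30522, (8, 128))  # masked-token targets
coords = torch.rand(8, 2)                       # gold tweet coordinates

mlm_loss = nn.CrossEntropyLoss()(
    mlm_head(hidden).reshape(-1, 30522), mlm_labels.reshape(-1))
geo_loss = nn.MSELoss()(geo_head(hidden[:, 0]), coords)  # from [CLS] state

loss = mlm_loss + geo_loss               # joint geoadaptation objective
```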
Labeled data is the foundation of most natural language processing tasks. However, labeling data is difficult, and there are often different valid beliefs about what the correct data labels should be. So far, dataset creators have acknowledged annotator subjectivity but have not actively managed it in the annotation process. This has led to partly subjective datasets that fail to serve a clear downstream use. To address this issue, we propose two contrasting data annotation paradigms. The descriptive paradigm encourages annotator subjectivity, whereas the prescriptive paradigm discourages it. Descriptive annotation allows different beliefs to be surveyed and modeled, whereas prescriptive annotation enables the training of models that consistently apply one belief. We discuss the benefits and challenges of implementing both paradigms and argue that dataset creators should explicitly aim for one or the other to facilitate the intended use of their dataset. Lastly, we design an annotation experiment to illustrate the contrast between the two paradigms.
Approximate inference methods such as the Laplace method, Laplace approximations, and variational methods are popular approaches when exact inference is not feasible due to the complexity of the model or the abundance of data. In this paper, we propose a hybrid approximate method, namely low-rank Variational Bayes correction (VBC), which uses the Laplace method and subsequently applies a Variational Bayes correction to the posterior. The cost is essentially that of the Laplace method, which ensures the scalability of the approach. We illustrate the method and its advantages with simulated and real data, at both small and large scale.
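Schematically, and not as the paper's exact formulation, the correction can be pictured as keeping the Laplace covariance while fitting a low-rank additive shift to the Laplace mean by variational means:

```latex
% Schematic only: the Laplace method supplies a Gaussian approximation
% N(mu_L, Sigma_L); a shift delta constrained to a k-dimensional subspace
% (k << n) is then chosen variationally to correct the posterior mean.
\begin{align}
  p(\theta \mid y) &\approx \mathcal{N}\big(\theta \mid \mu_{\mathrm{L}} + \delta,\ \Sigma_{\mathrm{L}}\big), \\
  \delta &= U \lambda, \qquad U \in \mathbb{R}^{n \times k},\ k \ll n, \\
  \lambda^{*} &= \arg\min_{\lambda}\ \mathrm{KL}\left[\mathcal{N}(\mu_{\mathrm{L}} + U\lambda,\ \Sigma_{\mathrm{L}})\,\middle\|\,p(\theta \mid y)\right].
\end{align}
```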